
Keyword Search Result

[Keyword] neural networks (287 hits)

Showing 261-280 of 287 hits

  • A Current-Mode Implementation of a Chaotic Neuron Model Using a SI Integrator

    Nobuo KANOU  Yoshihiko HORIO  Kazuyuki AIHARA  Shogo NAKAMURA  

     
    LETTER-Nonlinear Circuits and Systems

    Vol: E77-A No:1  Page(s): 335-338

    This paper presents an improved current-mode circuit for implementing a chaotic neuron model. The proposed circuit uses a switched-current integrator and a nonlinear output function circuit based on an operational transconductance amplifier as building blocks. It is shown by SPICE simulations and by experiments using discrete elements that the proposed circuit faithfully replicates the behavior of the chaotic neuron model.
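
    In its usual discrete-time form, the chaotic neuron model that such circuits implement combines state decay, refractory feedback through a steep sigmoid, and a bias input. Below is a minimal numerical sketch of that map; the parameter values are illustrative assumptions, not the component values of the circuit.

    ```python
    import numpy as np

    def chaotic_neuron(k=0.7, alpha=1.0, a=0.55, eps=0.02, steps=1000):
        """Iterate the discrete-time chaotic neuron map (parameters are
        illustrative assumptions, not the circuit's component values)."""
        f = lambda y: 1.0 / (1.0 + np.exp(-y / eps))  # steep sigmoidal output
        y, outputs = 0.0, []
        for _ in range(steps):
            y = k * y - alpha * f(y) + a   # decay - refractoriness + bias
            outputs.append(f(y))
        return np.array(outputs)

    x = chaotic_neuron()  # the output wanders chaotically for suitable parameters
    ```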

  • Identification of Chaotic Dynamical Systems with Back-Propagation Neural Networks

    Masaharu ADACHI  Makoto KOTANI  

     
    PAPER-Nonlinear Phenomena and Analysis

    Vol: E77-A No:1  Page(s): 324-334

    In this paper, we clarify fundamental properties of conventional back-propagation neural networks in learning chaotic dynamical systems through numerical experiments. We train three-layer networks with the back-propagation algorithm on data from two examples of two-dimensional discrete dynamical systems. We evaluate the trained networks qualitatively with two methods: analysing the geometrical mapping structure and reconstructing an attractor through recurrent feedback of the networks. We also evaluate the trained networks quantitatively by calculating the Lyapunov exponents, which indicate whether the dynamics of the recurrent networks are chaotic or periodic. In many cases, the trained networks show a high ability to extract the mapping structures of the original two-dimensional dynamical systems. We confirm that the Lyapunov exponents of the trained networks correspond to whether the attractors reconstructed by the recurrent networks are chaotic or periodic.
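
    As a concrete illustration of this procedure, the sketch below trains a three-layer network by batch back-propagation on one-step data from a two-dimensional map and then closes the loop to reconstruct an attractor. The Hénon map, network size, and learning settings are stand-in assumptions, not the paper's exact experimental setup.

    ```python
    import numpy as np

    def henon(n, a=1.4, b=0.3):
        """Generate a trajectory of the Henon map (a stand-in 2-D system)."""
        x = np.zeros((n, 2)); x[0] = (0.1, 0.1)
        for t in range(n - 1):
            x[t + 1] = (1 - a * x[t, 0]**2 + x[t, 1], b * x[t, 0])
        return x

    data = henon(2000)
    X, Y = data[:-1], data[1:]              # one-step prediction pairs

    rng = np.random.default_rng(0)
    H = 20
    W1, b1 = rng.normal(0, 0.5, (2, H)), np.zeros(H)
    W2, b2 = rng.normal(0, 0.5, (H, 2)), np.zeros(2)

    for _ in range(3000):                   # plain batch back-propagation
        h = np.tanh(X @ W1 + b1)
        err = h @ W2 + b2 - Y
        gW2, gb2 = h.T @ err / len(X), err.mean(0)
        dh = (err @ W2.T) * (1 - h**2)
        gW1, gb1 = X.T @ dh / len(X), dh.mean(0)
        for p, g in ((W2, gW2), (b2, gb2), (W1, gW1), (b1, gb1)):
            p -= 0.1 * g

    # Recurrent feedback: iterate the trained network from one point to
    # reconstruct an attractor, as in the qualitative evaluation above.
    x, traj = X[0].copy(), []
    for _ in range(1000):
        x = np.tanh(x @ W1 + b1) @ W2 + b2
        traj.append(x.copy())
    ```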

  • Physiologically-Based Speech Synthesis Using Neural Networks

    Makoto HIRAYAMA  Eric Vatikiotis-BATESON  Mitsuo KAWATO  

     
    PAPER

    Vol: E76-A No:11  Page(s): 1898-1910

    This paper focuses on two areas in our effort to synthesize speech from neuromotor input using neural network models that effect transforms between cognitive intentions to speak, their physiological effects on vocal tract structures, and subsequent realization as acoustic signals. The first area concerns the biomechanical transform between motor commands to muscles and the ensuing articulator behavior. Using physiological data of muscle EMG (electromyography) and articulator movements during natural English speech utterances, three articulator-specific neural networks learn the forward dynamics relating motor commands to the muscles and the motion of the tongue, jaw, and lips. Compared to a fully-connected network that maps muscle EMG and motion for all three sets of articulators at once, this modular approach improves performance by reducing network complexity and eliminates some of the confounding influence of functional coupling among articulators. Network independence has also allowed us to identify and assess the effects of technical and empirical limitations on an articulator-by-articulator basis. This is particularly important for modeling the tongue, whose complex structure is very difficult to examine empirically. The second area of progress concerns the transform between articulator motion and the speech acoustics. From the articulatory movement trajectories, a second neural network generates PARCOR (partial correlation) coefficients, which are then used to synthesize the speech acoustics. In the current implementation, articulator velocities have been added as inputs to the network. As a result, the model now follows the fast changes of the coefficients for consonants generated by relatively slow articulatory movements during natural English utterances. Although much work still needs to be done, progress in these areas brings us closer to our goal of emulating speech production processes computationally.

  • Generalization Ability of Extended Cascaded Artificial Neural Network Architecture

    Joarder KAMRUZZAMAN  Yukio KUMAGAI  Hiromitsu HIKITA  

     
    LETTER-Neural Networks

    Vol: E76-A No:10  Page(s): 1877-1883

    We present an extension of the previously proposed 3-layer feedforward network called a cascaded network. Cascaded networks are trained to perform category classification using binary input vectors and locally represented binary target output vectors. To realize nonlinearly separable tasks, the extended cascaded network presented here is constructed by introducing higher-order cross-product inputs at the input layer; a minimal sketch of this construction follows the abstract. In the construction of the cascaded network, two 2-layer networks are first trained independently by the delta rule and then cascaded. After cascading, the intermediate layer can be understood as a hidden layer which is trained to attain preassigned saturated outputs in response to the training set. In a cascaded network trained to categorize binary image patterns, the saturation of the hidden outputs reduces the effect of corrupting disturbances present in the input. We demonstrate that the extended cascaded network can realize nonlinearly separable tasks and yields better generalization ability than the backpropagation network.
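
    To make the construction concrete, here is a minimal sketch on XOR: the input layer is augmented with the cross-product term x1*x2, two 2-layer networks are trained independently by the delta rule against a preassigned intermediate code, and the two are then cascaded. The code assignment, learning rate, and epoch count are illustrative assumptions.

    ```python
    import numpy as np

    sig = lambda z: 1.0 / (1.0 + np.exp(-z))

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    X_aug = np.c_[X, X[:, 0] * X[:, 1], np.ones(4)]  # cross-product input + bias
    code = np.array([[0], [1], [1], [0]], float)     # preassigned intermediate code
    T = np.array([[0], [1], [1], [0]], float)        # XOR targets

    rng = np.random.default_rng(1)
    W1 = rng.normal(0, 0.1, (4, 1))                  # net 1: augmented input -> code
    W2 = rng.normal(0, 0.1, (2, 1))                  # net 2: code (+ bias) -> output

    for _ in range(20000):                           # the two nets are trained
        o1 = sig(X_aug @ W1)                         # independently by the delta rule
        W1 += 0.5 * X_aug.T @ ((code - o1) * o1 * (1 - o1))
        c_in = np.c_[code, np.ones(4)]
        o2 = sig(c_in @ W2)
        W2 += 0.5 * c_in.T @ ((T - o2) * o2 * (1 - o2))

    hidden = sig(X_aug @ W1)                         # after cascading, net 1's near-
    y = sig(np.c_[hidden, np.ones(4)] @ W2)          # saturated outputs drive net 2
    ```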

  • Exploiting Parallelism in Neural Networks on a Dynamic Data-Driven System

    Ali M. ALHAJ  Hiroaki TERADA  

     
    PAPER-Neural Networks

    Vol: E76-A No:10  Page(s): 1804-1811

    High-speed simulation of neural networks can be achieved through parallel implementations capable of exploiting their massive inherent parallelism. In this paper, we show how this inherent parallelism can be effectively exploited on parallel data-driven systems. With these systems, the asynchronous parallelism of neural networks can be naturally specified by functional data-driven programs and maximally exploited by pipelined, scalable data-driven processors. We demonstrate the suitability of data-driven systems for the parallel simulation of neural networks through a parallel implementation of the widely used back-propagation networks. The implementation exploits the network and training-set parallelisms inherent in these networks, and is evaluated using an image data compression network.
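
    As a software analogue of the training-set parallelism described above, the sketch below computes back-propagation gradients for shards of the training set in parallel and combines them into a single update. The thread pool stands in for the pipelined data-driven processors, and the single-layer network and data are toy assumptions.

    ```python
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    rng = np.random.default_rng(0)
    W = rng.normal(0, 0.1, (4, 1))                  # toy single-layer network
    X = rng.normal(size=(1024, 4))
    Y = (X.sum(1, keepdims=True) > 0).astype(float)

    def shard_gradient(shard):
        """Gradient of the squared error on one shard of the training set."""
        Xs, Ys = shard
        out = 1.0 / (1.0 + np.exp(-Xs @ W))         # forward pass on this shard
        return Xs.T @ ((out - Ys) * out * (1 - out))

    shards = list(zip(np.array_split(X, 4), np.array_split(Y, 4)))
    with ThreadPoolExecutor(max_workers=4) as pool:
        grads = list(pool.map(shard_gradient, shards))  # shards run in parallel
    W -= 0.1 * sum(grads) / len(X)                  # one combined update step
    ```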

  • On the Multiuser Detection Using a Neural Network in Code-Division Multiple-Access Communications

    Teruyuki MIYAJIMA  Takaaki HASEGAWA  Misao HANEISHI  

     
    PAPER

    Vol: E76-B No:8  Page(s): 961-968

    In this paper we consider multiuser detection using a neural network in a synchronous code-division multiple-access channel. In a code-division multiple-access channel, a matched filter is widely used as a receiver. However, when the relative powers of the interfering signals are large (the near-far problem), the performance of the matched-filter receiver degrades. Although the optimum receiver for multiuser detection is superior to the matched-filter receiver in such situations, the optimum receiver is too complex to be implemented, so a simple technique for implementing optimum multiuser detection is required. Recurrent neural networks, which consist of a number of simple processing units, can rapidly provide a collectively computed solution by seeking out a minimum of an energy function, while optimum multiuser detection in a synchronous channel is carried out by maximizing a likelihood function. In this paper, it is shown that the energy function of the neural network is identical to the likelihood function of optimum multiuser detection, so the neural network can be used to implement the optimum multiuser detector. Performance comparisons among the optimum receiver, the matched-filter receiver, and the neural-network receiver are carried out by computer simulations. It is shown that the neural-network receiver can achieve near-optimum performance in several situations, and that local-minimum problems are rarely serious.
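
    The correspondence can be sketched numerically: for a synchronous channel, maximizing the likelihood 2(Ay)'b - b'Hb over b in {-1,+1}^K matches, up to constants, minimizing a Hopfield energy with weights proportional to -H (off-diagonal) and biases proportional to Ay, so asynchronous sign updates seek the optimum decision. The spreading codes, amplitudes, and noise level below are toy assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    K, N = 4, 31
    codes = rng.choice([-1.0, 1.0], (K, N)) / np.sqrt(N)  # signature sequences
    amps = np.array([1.0, 2.0, 4.0, 8.0])                 # strong near-far ratio
    b_true = rng.choice([-1.0, 1.0], K)

    r = (amps * b_true) @ codes + 0.3 * rng.normal(size=N)  # received signal
    y = codes @ r                                           # matched-filter outputs
    ay = amps * y
    H = np.outer(amps, amps) * (codes @ codes.T)            # cross-correlations

    b = np.sign(y)                     # start from matched-filter decisions
    for _ in range(20):                # asynchronous Hopfield-style updates
        for i in rng.permutation(K):
            b[i] = np.sign(ay[i] - (H[i] @ b - H[i, i] * b[i]))
    ```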

  • Learning of a Multi-Valued Neural Network and Its Application

    Ryuzo TAKIYAMA  Koichiro KUBO  

     
    PAPER-Nonlinear Circuits and Neural Nets

    Vol: E76-A No:6  Page(s): 873-877

    A learning procedure for a three-layer neural network with a restricted structure, called a multi-valued neural network, is proposed. The three-layer net has a single linear neuron in its output layer, and all hidden neurons share identical input weights. The network takes k+1 distinct stable values, where k is the number of hidden neurons. The proposed learning procedure consists of two parts, Phase I and Phase II: the former handles the learning of the weights between the hidden and output layers, and the latter the weights between the input and hidden layers. The network is applied to the classification of numerals, which shows the effectiveness of the proposed learning procedure.
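
    The restricted structure can be illustrated directly (training omitted): with k hidden signum neurons sharing one input weight vector but having distinct thresholds, and a single linear output neuron, the output steps through k+1 distinct values. All numbers below are illustrative assumptions.

    ```python
    import numpy as np

    k = 3
    w = np.array([1.0, -0.5])                # input weights shared by all hidden neurons
    thresholds = np.array([-1.0, 0.0, 1.0])  # one threshold per hidden neuron
    v = np.ones(k)                           # hidden-to-output weights

    def net(x):
        u = w @ x
        return float(v @ np.sign(u - thresholds))  # one of k+1 output levels

    print([net(np.array([a, 0.0])) for a in (-2.0, -0.5, 0.5, 2.0)])
    # -> [-3.0, -1.0, 1.0, 3.0]: four (= k+1) distinct stable values
    ```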

  • Hardware Implementation of the Multifrequency Oscillation Learning Method for Analog Neural Networks

    Hiroshi MIYAO  Masafumi KOGA  Takao MATSUMOTO  

     
    LETTER-Bio-Cybernetics

    Vol: E76-D No:6  Page(s): 717-728

    High-speed learning of neural networks using the multifrequency oscillation method is demonstrated for the first time. During the learning of an analog neural network integrated circuit implementing exclusive-OR logic, the weight and threshold values converge to steady states within 2 ms at a learning speed of 2 mega-patterns per second.

  • Robust Performance Using Cascaded Artificial Neural Network Architecture

    Joarder KAMRUZZAMAN  Yukio KUMAGAI  Hiromitsu HIKITA  

     
    LETTER-Digital Signal Processing

    Vol: E76-A No:6  Page(s): 1023-1030

    It has been reported that the generalization performance of multilayer feedforward networks strongly depends on the attainment of saturated hidden outputs in response to the training set, whereas a standard Backpropagation (BP) network mostly uses intermediate values of the hidden units as the internal representation of the training patterns. In this letter, we propose the construction of a 3-layer cascaded network in which two 2-layer networks are first trained independently by the delta rule and then cascaded. After cascading, the intermediate layer can be viewed as a hidden layer which is trained to attain preassigned saturated outputs in response to the training set. This network is particularly easy to construct for a linearly separable training set, and can also be constructed for nonlinearly separable tasks by using higher-order inputs at the input layer or by assigning proper codes at the intermediate layer, which can be obtained from a trained Fahlman-Lebiere network. Simulation results show that, at least when the training set is linearly separable, the proposed cascaded network significantly enhances generalization performance compared to the BP network, and it also maintains high generalization ability for nonlinearly separable training sets. The dependence of the cascaded network's performance on the codes preassigned at the intermediate layer is discussed, and a suggestion about the preassigned coding is presented.

  • A Generalized Unsupervised Competitive Learning Scheme

    Ferdinand PEPER  Hideki NODA  

     
    PAPER-Neural Networks

    Vol: E76-A No:5  Page(s): 834-841

    In this article a neural network learning scheme is described which is a generalization of VQ (Vector Quantization) and ART2a (a simplified version of Adaptive Resonance Theory 2). The basic differences between VQ and ART2a are exhibited, and it is shown how these differences are covered by the generalized scheme. The generalized scheme enables a rich set of variations on VQ and ART2a. One such variation uses the expression ||I||^2 + ||z_j||^2/||z_j|| sin(I, z_j) as the distance measure between input vector I and weight vector z_j. This variation tends to be more robust to noise than ART2a, as shown by experiments we performed. These experiments use the same data set as the ART2a experiments in Ref.(3).
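
    A minimal sketch of the common skeleton: competitive learning in which the winner is selected by a pluggable distance measure, with Euclidean distance giving VQ-style behaviour and an angle-based measure approximating ART2a-style matching. The update rule, learning rate, and data are assumptions and simpler than the full generalized scheme.

    ```python
    import numpy as np

    def euclidean(I, z):
        return np.linalg.norm(I - z)

    def angular(I, z):                        # ART2a-like: angle between vectors
        c = I @ z / (np.linalg.norm(I) * np.linalg.norm(z) + 1e-12)
        return 1.0 - c

    def train(X, n_units, dist, lr=0.05, epochs=20, seed=0):
        rng = np.random.default_rng(seed)
        Z = X[rng.choice(len(X), n_units, replace=False)].copy()
        for _ in range(epochs):
            for I in X[rng.permutation(len(X))]:
                j = min(range(n_units), key=lambda u: dist(I, Z[u]))
                Z[j] += lr * (I - Z[j])       # move the winner toward the input
        return Z

    X = np.random.default_rng(1).normal(size=(300, 4))
    codebook_vq = train(X, 8, euclidean)      # VQ-style variation
    codebook_art = train(X, 8, angular)       # ART2a-like variation
    ```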

  • Induction Motor Modelling Using Multi-Layer Perceptrons

    Paolo ARENA  Luigi FORTUNA  Antonio GALLO  Salvatore GRAZIANI  Giovanni MUSCATO  

     
    PAPER-Neural Nets--Theory and Applications--

    Vol: E76-A No:5  Page(s): 761-771

    Asynchronous machines are a topic of great interest in the research area of actuators. Owing to the complexity of these systems and to the required performance, the modelling and control of asynchronous machines are difficult problems. Problems arise when the control goals require accurate descriptions of the electric machine or when some electrical parameters must be identified; in the models employed it becomes very hard to take into account all the phenomena involved and therefore to make the error amplitude adequately small. Moreover, it is well known that, although an efficient control strategy requires knowledge of the flux vector, direct measurement of this quantity using ad hoc transducers is not a suitable approach, because it results in expensive machines. It is therefore necessary to estimate this vector using adequate dynamic non-linear models. Several strategies have been proposed in the literature to address these issues. In this paper the authors propose a neural approach both to derive NARMAX models for asynchronous machines and to design non-linear observers; the need for complex models that may be inefficient for control purposes is therefore avoided. The results obtained with the proposed strategy are compared with simulations of a classical fifth-order non-linear model.
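
    The identification side can be sketched as follows: a multi-layer perceptron is fitted as a NARMAX-style one-step predictor from lagged outputs and inputs. The toy first-order plant, lag orders, and training settings below are assumptions standing in for the induction-motor data used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    u = rng.uniform(-1, 1, 500)                   # excitation input
    y = np.zeros(500)
    for t in range(1, 500):                       # toy nonlinear plant
        y[t] = 0.7 * y[t-1] + 0.4 * np.tanh(u[t-1]) + 0.1 * y[t-1] * u[t-1]

    ny, nu = 2, 2                                 # assumed lag orders
    rows = list(range(max(ny, nu), 500))
    X = np.array([[y[t-1], y[t-2], u[t-1], u[t-2]] for t in rows])
    T = y[rows][:, None]                          # one-step-ahead targets

    H = 10
    W1 = rng.normal(0, 0.3, (4, H))
    W2 = rng.normal(0, 0.3, (H, 1))
    for _ in range(5000):                         # batch gradient descent
        h = np.tanh(X @ W1)
        err = h @ W2 - T
        g2 = h.T @ err / len(X)
        g1 = X.T @ ((err @ W2.T) * (1 - h**2)) / len(X)
        W2 -= 0.05 * g2
        W1 -= 0.05 * g1
    ```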

  • Abrupt Variations of Attractors Caused by Argumental Discreteness in Non-Hermitian Associative Memories

    Akira HIROSE  

     
    LETTER-Neural Nets--Theory and Applications--

    Vol: E76-A No:5  Page(s): 777-779

    Abrupt variations of attractors caused by argumental discreteness in non-Hermitian complex-valued neural networks are reported. When complex-valued associative memories are applied to dynamical processing, the weighting matrices are in general constructed as non-Hermitian so that they exert a motive force on the signal vectors. It is observed that competition between the argumental rotation force and the noise-suppression ability of the associative memories leads to trajectory distortions and abrupt variations of the attractors.
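
    A heavily simplified sketch of the mechanism: a weighting matrix built as a pattern-to-rotated-pattern outer product is non-Hermitian, so the stored state acquires a rotational drive and steps around the phase grid. The storage rule, phase-quantizing nonlinearity, and sizes are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, levels = 16, 8
    p = np.exp(1j * rng.integers(0, levels, n) * 2 * np.pi / levels)  # stored pattern
    q = p * np.exp(1j * 2 * np.pi / levels)        # its rotated successor
    W = np.outer(q, p.conj()) / n                  # non-Hermitian since q != p

    def quantize(x):                               # project phases onto the grid
        k = np.round(np.angle(x) * levels / (2 * np.pi))
        return np.exp(1j * 2 * np.pi * k / levels)

    x = p.copy()
    for _ in range(levels):
        x = quantize(W @ x)                        # the state rotates step by step
    ```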

  • Neural Network Configuration for Multiple Sound Source Location and Its Performance

    Shinichi SATO  Takuro SATO  Atsushi FUKASAWA  

     
    PAPER-Neural Nets--Theory and Applications--

    Vol: E76-A No:5  Page(s): 754-760

    A method of estimating multiple sound source locations based on a neural network algorithm, and its performance, are described in this paper. An evaluation function is first defined to reflect both the spherical-wavefront property of sound propagation and the uniqueness of the solution. A neural network is then constructed to satisfy the conditions of this evaluation function; the locations of the multiple sources are given by the excited neurons. The proposed method is evaluated and compared with a deterministic method based on the Hyperbolic Method for the case of 8 sources on a square plane of 200 m × 200 m. The solutions are found to be obtained correctly, without any spurious or dropped-out solutions. The proposed method is also applied to another case in which 54 sound sources form 9 groups, each containing 6 sources. The proposed method is found to be effective and sufficient for practical application.

  • An Automatic Adjustment Method of Backpropagation Learning Parameters, Using Fuzzy Inference

    Fumio UENO  Takahiro INOUE  Kenichi SUGITANI  Badur-ul-Haque BALOCH  Takayoshi YAMAMOTO  

     
    PAPER-Neural Networks

    Vol: E76-A No:4  Page(s): 631-636

    In this work, we introduce fuzzy inference into the conventional backpropagation learning algorithm for networks of neuron-like units. The procedure repeatedly adjusts the learning parameters and drives the system to converge in the shortest possible time. This technique is appropriate in the sense that optimum learning parameters are applied automatically in every learning cycle, whereas conventional backpropagation contains no well-defined rule for properly determining the values of the learning parameters.
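
    The flavour of the approach can be conveyed with a toy Sugeno-style rule base: memberships for "error falling / steady / rising" are computed from the error trend after each epoch, and their weighted consequents rescale the learning rate. The membership functions and rule table below are invented illustrations, not the paper's actual inference rules.

    ```python
    import numpy as np

    def fuzzy_lr_update(lr, err_prev, err_now):
        """Rescale the learning rate from the relative error change."""
        change = (err_now - err_prev) / max(err_prev, 1e-12)
        falling = np.clip(-change / 0.1, 0, 1)      # membership: error falling
        rising = np.clip(change / 0.1, 0, 1)        # membership: error rising
        steady = np.clip(1 - abs(change) / 0.1, 0, 1)
        # rule consequents: grow lr if falling, hold if steady, cut if rising
        w = falling + steady + rising
        factor = (falling * 1.1 + steady * 1.0 + rising * 0.5) / w
        return lr * factor

    lr, errs = 0.1, [1.0, 0.8, 0.85]                # toy per-epoch error history
    for e_prev, e_now in zip(errs, errs[1:]):
        lr = fuzzy_lr_update(lr, e_prev, e_now)
    ```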

  • A Current-Mode Circuit of a Chaotic Neuron Model

    Nobuo KANOU  Yoshihiko HORIO  Kazuyuki AIHARA  Shogo NAKAMURA  

     
    PAPER-Neural Networks

    Vol: E76-A No:4  Page(s): 642-644

    A model of a single neuron with chaotic dynamics is implemented using current-mode circuit design techniques. The existence of chaotic dynamics in the circuit is demonstrated by simulation with SPICE3. The proposed circuit is suitable for implementing a chaotic neural network composed of such neuron models on a VLSI chip.

  • Error-Correction Learning of Three Layer Neural Networks Based on Linear-Homogeneous Expressions

    Ryuzo TAKIYAMA  Kimitoshi FUKUDOME  

     
    PAPER-Neural Networks

    Vol: E76-A No:4  Page(s): 637-641

    The three-layer neural network (TLNN) is treated, where the neuron nonlinearity is the signum function. First we propose an expression for the discriminant function of the TLNN, called a linear-homogeneous expression. This expression allows differentiation in spite of the signum property of the neurons. Subsequently, a learning algorithm is proposed based on the linear-homogeneous form. The algorithm is an error-correction procedure, which gives a mathematical foundation to the heuristic error-correction learning rules described in various places in the literature.

  • Multiple-Valued Memory Using Floating Gate Devices

    Takeshi SHIMA  Stephanie RINNERT  

     
    PAPER

    Vol: E76-C No:3  Page(s): 393-402

    This paper discusses a multiple-valued memory circuit using floating gate devices. The object of the paper is to provide a new and improved analog memory device that stores an amount of charge accurately corresponding to the analog information to be stored.

  • Unsupervised Learning Algorithm for Fuzzy Clustering

    Kiichi URAHAMA  

     
    LETTER-Bio-Cybernetics

    Vol: E76-D No:3  Page(s): 390-391

    An adaptive algorithm is presented for the fuzzy clustering of data. Partitioning is fuzzified by the addition of an entropy term to the objective function. The proposed method produces more convex membership functions than those given by the fuzzy c-means algorithm.
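
    The resulting update has a clean closed form worth sketching: with the entropy term added, the memberships become a softmax of the negative squared distances, alternated with weighted centroid updates. The regularization weight lam and the data below are assumptions.

    ```python
    import numpy as np

    def entropy_fuzzy_cluster(X, c, lam=0.5, iters=50, seed=0):
        """Alternate softmax membership and weighted centroid updates."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), c, replace=False)].copy()
        for _ in range(iters):
            d2 = ((X[:, None, :] - centers[None, :, :])**2).sum(-1)
            U = np.exp(-d2 / lam)
            U /= U.sum(1, keepdims=True)             # membership update
            centers = (U.T @ X) / U.sum(0)[:, None]  # centroid update
        return U, centers

    X = np.random.default_rng(1).normal(size=(200, 2))
    U, centers = entropy_fuzzy_cluster(X, c=3)
    ```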

  • A Theoretical Analysis of Neural Networks with Nonzero Diagonal Elements

    Masaya OHTA  Yoichiro ANZAI  Shojiro YONEDA  Akio OGIHARA  

     
    PAPER

    Vol: E76-A No:3  Page(s): 284-291

    This article analyzes the properties of fully interconnected neural networks as a method for solving combinatorial optimization problems in general. In particular, in order to escape local minima in this model, we theoretically analyze the relation between the diagonal elements of the connection matrix and the stability of the networks. It is shown that the position of the global minimum point of the energy function on the hypersphere in n-dimensional space is given by the eigenvector corresponding to the maximum eigenvalue of the connection matrix. It is then shown that the diagonal elements of the connection matrix can be improved without loss of generality. The equilibrium points of the improved networks are classified according to their properties, and their stability is investigated. To show that the change of the diagonal elements improves the potential for global minimum search, computer simulations are carried out using the theoretical values. According to the simulation results on 10 neurons, the success rate of obtaining the optimum solution is 97.5%. This result shows that the improvement of the diagonal elements enhances the potential for global minimum search.
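
    The stated eigenvector property is easy to check numerically: on the unit hypersphere, the energy -x'Wx/2 attains its minimum at the eigenvector of the maximum eigenvalue of the connection matrix. The random symmetric W below is an assumption for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(10, 10))
    W = (A + A.T) / 2                           # symmetric connection matrix

    vals, vecs = np.linalg.eigh(W)              # ascending eigenvalues
    v_max = vecs[:, -1]                         # eigenvector of the max eigenvalue

    E = lambda x: -0.5 * x @ W @ x
    samples = rng.normal(size=(1000, 10))
    others = [E(x / np.linalg.norm(x)) for x in samples]
    assert E(v_max) <= min(others)              # v_max attains the lowest energy
    ```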

  • Text-Independent Speaker Recognition Using Neural Networks

    Hiroaki HATTORI  

     
    PAPER-Speech Processing

    Vol: E76-D No:3  Page(s): 345-351

    This paper describes a text-independent speaker recognition method using predictive neural networks. For text-independent speaker recognition, an ergodic model which allows transitions to any other state, including self-transitions, is adopted as the speaker model, and one predictive neural network is assigned to each state. The proposed method was compared to quantization-distortion-based methods, HMM-based methods, and a discriminative neural network based method through text-independent speaker identification experiments on 24 female speakers. The proposed method gave the highest identification rate, 100.0%, demonstrating the effectiveness of predictive neural networks for representing speaker individuality.
